Conversation

@qma (Contributor) commented Oct 15, 2024

Summary:
Since master migrated aot_compiler to use to_edge_transform_and_lower in a previous change (#6026), the XNNPACK quantization options can be enabled by default for the following models:

  • Quantized ViT
  • Quantized Mobilebert
  • Quantized Emformer Predict
  • Quantized Emformer Transcribe
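The gating this PR describes — quantization on by default for a specific list of models — can be illustrated with a small sketch. This is a hypothetical illustration only: the names `MODELS_TO_QUANTIZE` and `resolve_quantize` are made up for this example and are not the actual aot_compiler API.

```python
from typing import Optional

# Models for which this PR enables XNNPACK quantization by default.
# (Identifiers here are illustrative; the real model keys may differ.)
MODELS_TO_QUANTIZE = {
    "vit",
    "mobilebert",
    "emformer_predict",
    "emformer_transcribe",
}


def resolve_quantize(model_name: str, quantize_flag: Optional[bool] = None) -> bool:
    """Decide whether to quantize a model.

    An explicit flag (e.g. from the command line) always wins; otherwise
    the per-model default applies.
    """
    if quantize_flag is not None:
        return quantize_flag
    return model_name in MODELS_TO_QUANTIZE
```

With this shape, existing callers that pass an explicit flag are unaffected, while the four models listed above pick up quantization automatically.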

Differential Revision: D64081319

@pytorch-bot bot commented Oct 15, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6242

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Oct 15, 2024
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D64081319

@qma qma force-pushed the export-D64081319 branch from 98aa423 to a435029 Compare October 16, 2024 18:19
qma added a commit to qma/executorch that referenced this pull request Oct 16, 2024
pytorch#6242)

Summary:

Since master migrated aot_compiler to use to_edge_transform_and_lower in a previous change (pytorch#6026), the XNNPACK quantization options can be enabled by default for the following models:

- Quantized ViT
- Quantized Mobilebert
- Quantized Emformer Predict
- Quantized Emformer Transcribe

Reviewed By: digantdesai

Differential Revision: D64081319

qma added a commit to qma/executorch that referenced this pull request Oct 16, 2024
@qma qma force-pushed the export-D64081319 branch from a435029 to 2221ff4 Compare October 16, 2024 18:25
@qma qma force-pushed the export-D64081319 branch from 2221ff4 to 7406523 Compare October 16, 2024 20:10

@facebook-github-bot (Contributor) commented:

This pull request has been merged in 5f12f28.

Labels: CLA Signed, fb-exported, Merged